Maximizing surgical success in epilepsy: seizure-onset zone resection is crucial, while extending beyond offers no added benefit (supplementary file 2)

Chifaou Abdallah1,2,†,*, Zhengchen Cai1,†, Saba Rammal1, Olivier Aron3,4, Nicolás Von Ellenrieder1, Gang Chen5, Sana Hannan6, John Thomas7,8, Philippe Kahane9, Lorella Minotti9, Stephan Chabardes9, Sophie Colnat-Coulbois3,4, Louis Maillard3,4, Jeff Hall1, Francois Dubeau1, Jean Gotman1, Christophe Grova2,10,11,‡, Birgit Frauscher1,7,8,*,‡

1. Montreal Neurological Institute and Hospital, McGill University, Montréal, Québec H3A 2B4, Canada.

2. Multimodal Functional Imaging Lab, Biomedical Engineering Dpt, McGill University, Montréal, Québec, Canada.

3. Department of Neurology, University Hospital of Nancy, Lorraine University, F-54000 Nancy, France.

4. Research Center for Automatic Control of Nancy (CRAN), Lorraine University, CNRS, UMR, 7039 Vandoeuvre, France.

5. Scientific and Statistical Computing Core, National Institute of Mental Health, USA.

6. Department of Biomedical and Life Sciences, Lancaster University, Lancaster, UK.

7. Department of Biomedical Engineering, Duke Pratt School of Engineering, Durham, NC, USA.

8. Department of Neurology, Duke University Medical Center, Durham, NC, USA.

9. CHU Grenoble Alpes, Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France.

10. Multimodal Functional Imaging Lab, Department of Physics, Concordia University, Montréal, Québec, Canada.

11. Concordia School of Health/PERFORM Centre, Concordia University, Montréal, Québec, Canada.

† These authors contributed equally to this work.

‡ These authors jointly directed the work.

* Corresponding Authors: Chifaou Abdallah, Birgit Frauscher

Preface

This supplementary material summarizes the analyses conducted for the manuscript “Maximizing surgical success in epilepsy: seizure-onset zone resection is crucial, while extending beyond offers no added benefit.”

Instructions: Each results section contains the corresponding computation code chunk. All code chunks and printed messages are folded by default. They can be unfolded to see the details. This provides a more coherent way of presenting results and code together. Highlighted text is in orange, and hyperlinks to references and sections are in light blue.

Library

Code
library(tidyverse)
library(ggplot2)
library(stringr)
library(ggpubr)
library(scales)
library(ragg)
library(readxl)
library(gt)
library(ggstatsplot)
library(brms)
library(tidybayes)
library(webr)
library(DT)
library(dagitty)
library(ggdag)
library(kableExtra)
library(moonBook)
library(ggforce)
library(ggsci)
library(marginaleffects)
library(ggrepel)
library(brainspriteR)
library(readr)

Data wrangling

Load the data and prepare variable values and levels. Only Engel I is considered a good outcome; all others are considered a poor outcome. All categorical variables are encoded as factors. Note that, for convenience in setting priors and for numerical stability of model fitting, the continuous variables are standardized.

Code
figdpi <- 300
soz_size <- 5
soz_or_sozp1 <- "onset" # onset or earlypropagation

df_postreviewed <-
  read_csv(
    paste0(
      "E:/SOZCavity/derivatives/results/allvariable_radius-",
      soz_size,
      "mm_SOZ-onset_postreviewed_CA.csv"
      )
    )

if (soz_or_sozp1 == "onset"){

  df_auto <-
  read_csv(
    paste0(
      "E:/SOZCavity/derivatives/results/allvariable_radius-",
      soz_size,
      "mm_SOZ-onset.csv"
    )
  )
    
} else if (soz_or_sozp1 == "earlypropagation") {

  df_auto <-
    read_csv(
      paste0(
        "E:/SOZCavity/derivatives/results/allvariable_radius-",
        soz_size,
        "mm_SOZ-earlypropagation.csv"
        )
      )
  
  df_postreviewed <- df_auto %>% 
    select(-c(laterality_soz, laterality_seeg, epilepsy_type, 
              eloquent, focality, pathology)) %>% 
    left_join(df_postreviewed %>% select(sub, laterality_soz, laterality_seeg, 
                             epilepsy_type, eloquent, focality, pathology, phase12, lvfa), by = "sub")
  
} else {
  stop("Error: wrong SOZ type")
}

df <- df_postreviewed %>%
  mutate(phase12 = str_replace_all(phase12, "completely", "complete"),
         phase12 = str_replace_all(phase12, "partially", "partial")) %>% 
  select(
    sub,
    cavity_vol,
    soz_vol,
    overlap_vol,
    overlap_ratio,
    nonsoz_ratio,
    contact_ratio,
    outcome,
    outcome_simplified,
    outcome_detailed,
    laterality_seeg,
    epilepsy_type,
    eloquent,
    focality,
    pathology,
    phase12,
    lvfa
  ) %>%
  rename(laterality = laterality_seeg) %>%
  mutate(site = case_when(
    str_detect(sub, "ep") ~ "MNI",
    str_detect(sub, "gp") ~ "Grenoble",
    str_detect(sub, "np") ~ "Nancy"
  )) %>%
  mutate(
    eloquent = str_to_lower(eloquent),
    focality = str_to_lower(focality),
    epilepsy_type = str_remove(str_to_lower(epilepsy_type), "lobe")
  ) %>%
  mutate(outcome = str_remove(outcome_detailed, "[ABCD]")) %>% 
  mutate(outcome_simplified = ifelse(outcome == "I", "good outcome", "poor outcome")) %>% 
  #mutate(outcome_simplified = ifelse(outcome_detailed == "IA", "good outcome", "poor outcome")) %>% 
  mutate(epilepsy_type = case_when(epilepsy_type == "frontal" ~ "FLE",
                                   epilepsy_type == "frontal+" ~ "FLE+",
                                   epilepsy_type == "temporal" ~ "TLE",
                                   epilepsy_type == "temporal+" ~ "TLE+",
                                   epilepsy_type %in% c("parietal", "occipital", "carrefour") ~ "PE",
                                   epilepsy_type %in% c("parietal+", "occipital+") ~ "PE+",
                                   epilepsy_type == "insular" ~ "OIE")) %>%
  mutate(pathology = ifelse(pathology == "hippocampal sclerosis", "HS", pathology)) %>%
  mutate(pathology = ifelse(pathology == "other", "LEAT|VM", pathology)) %>%
  mutate(
    outcome = factor(outcome, levels = c("IV", "III", "II", "I")),
    outcome_simplified = factor(outcome_simplified, levels = c("poor outcome", "good outcome")),
    sub = factor(sub),
    site = factor(site),
    laterality = factor(laterality),
    eloquent = factor(eloquent),
    epilepsy_type = factor(str_remove_all(epilepsy_type, " ")),
    focality = factor(focality),
    pathology = factor(pathology),
    phase12 = factor(phase12, levels = c("discordant", "partial", "complete")),
    lvfa = factor(lvfa)
  )

if (soz_or_sozp1 == "onset"){
  write_csv(df, 
            paste0(
              "E:/SOZCavity/derivatives/results/allvariable_radius-",
              soz_size,
              "mm_SOZ-onset_final.csv"
              )
            )
} else {
    write_csv(df, 
            paste0(
              "E:/SOZCavity/derivatives/results/allvariable_radius-",
              soz_size,
              "mm_SOZ-earlypropagation_final.csv"
              )
            )
}


cavity_vol_mean <- mean(df$cavity_vol)
cavity_vol_sd <- sd(df$cavity_vol)
soz_vol_mean <- mean(df$soz_vol)
soz_vol_sd <- sd(df$soz_vol)
overlap_vol_mean <- mean(df$overlap_vol)
overlap_vol_sd <- sd(df$overlap_vol)
overlap_ratio_mean <- mean(df$overlap_ratio)
overlap_ratio_sd <- sd(df$overlap_ratio)
nonsoz_ratio_mean <- mean(df$nonsoz_ratio)
nonsoz_ratio_sd <- sd(df$nonsoz_ratio)
contact_ratio_mean <- mean(df$contact_ratio)
contact_ratio_sd <- sd(df$contact_ratio)

df_norm <- df %>%
  mutate(cavity_vol = (cavity_vol - cavity_vol_mean) / cavity_vol_sd,
         soz_vol = (soz_vol - soz_vol_mean) / soz_vol_sd,
         overlap_vol = (overlap_vol - overlap_vol_mean) / overlap_vol_sd,
         overlap_ratio = (overlap_ratio - overlap_ratio_mean) / overlap_ratio_sd,
         nonsoz_ratio = (nonsoz_ratio - nonsoz_ratio_mean) / nonsoz_ratio_sd,
         contact_ratio = (contact_ratio - contact_ratio_mean) / contact_ratio_sd)


df_fit <- df_norm %>% 
  filter(!is.na(pathology)) %>% 
  filter(pathology != "normal") %>% 
  mutate(interaction = str_c(laterality, epilepsy_type, 
                             eloquent, focality, pathology, phase12, lvfa, sep = "_")) %>% 
  mutate(interaction = factor(interaction)) %>% 
  select(outcome_simplified, soz_vol, cavity_vol, overlap_ratio, nonsoz_ratio,
         laterality, epilepsy_type, 
         eloquent, focality, pathology, phase12, lvfa, interaction) %>% 
  mutate(pathology = factor(pathology))

1 Part I: Conventional analysis

1.1 Univariate analysis for categorical variables

Conventional statistical tests on categorical variables are conducted to assess whether the count distributions of two categorical variables are independent of each other. We report the results of Pearson’s chi-squared test (in the plot subtitle). To interpret these results, consider for instance focality: Pearson’s χ2-test of independence reveals that, across 200 patients, there is no statistically significant association between focality and surgical outcome. In fact, none of these categorical variables shows a statistically significant association with surgical outcome, except for phase I-II.
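As a minimal sketch of such a test (assuming the wrangled data frame `df` from the Data wrangling section; the plotting call is illustrative, since the figures in this section are produced with ggstatsplot):

```r
# Pearson's chi-squared test of independence between a categorical
# predictor and the dichotomized surgical outcome (illustrative sketch,
# assuming `df` from the Data wrangling section above).
tab <- table(df$focality, df$outcome_simplified)
chisq.test(tab)

# ggstatsplot prints the same test in the plot subtitle, e.g.:
# ggstatsplot::ggbarstats(df, x = outcome_simplified, y = focality)
```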

1.2 Univariate analysis for continuous variables

Non-parametric between-group tests are used to examine whether the continuous variables differ between surgical outcome groups. The SOZ-resected fraction shows a considerable difference between good and poor outcomes, with a median difference of approximately 16%. In contrast, the differences in SOZ volume, cavity volume, and the non-SOZ portion of the cavity between good and poor outcomes are subtle.
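As a minimal sketch of such a comparison (assuming `df` from the Data wrangling section; the exact non-parametric test used in the figures may differ, e.g. ggstatsplot defaults to the Mann-Whitney U test for two groups):

```r
# Illustrative non-parametric two-group comparison
# (Wilcoxon rank-sum / Mann-Whitney U), assuming `df` is available.
wilcox.test(overlap_ratio ~ outcome_simplified, data = df)

# Group medians, to compare with the ~16% median difference noted above:
aggregate(overlap_ratio ~ outcome_simplified, data = df, FUN = median)
```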

1.3 Multivariate analysis with all variables

Logistic regression is conducted using all variables, and an analysis of deviance (Type II likelihood-ratio tests, via car::Anova()) is performed to test the significance of each variable in the logistic regression. Only the SOZ-resected fraction and phase I-II are statistically significant, indicating that these two variables have a significant positive association with the outcome.

Conventional glm() model code
mod_glm <- glm(outcome_simplified ~ 
                1 + laterality + epilepsy_type + 
                eloquent + focality + pathology + 
                phase12 + lvfa + cavity_vol + 
                soz_vol + overlap_ratio + nonsoz_ratio, 
                family = binomial(link = "logit"), df_fit)

parameters::parameters(mod_glm)
Conventional glm() model fitting results
Parameter                | Log-Odds |   SE |        95% CI |     z |     p
--------------------------------------------------------------------------
(Intercept)              |     0.20 | 0.97 | [-1.69, 2.15] |  0.21 | 0.835
laterality [unilateral]  |     0.24 | 0.40 | [-0.54, 1.04] |  0.60 | 0.546
epilepsy type [FLE+]     |     0.10 | 0.75 | [-1.36, 1.60] |  0.13 | 0.894
epilepsy type [OIE]      |    -0.84 | 0.85 | [-2.53, 0.82] | -0.99 | 0.321
epilepsy type [PE]       |     1.43 | 1.29 | [-0.80, 4.63] |  1.11 | 0.267
epilepsy type [PE+]      |    -0.95 | 1.48 | [-4.34, 1.89] | -0.64 | 0.523
epilepsy type [TLE]      |    -0.98 | 0.67 | [-2.32, 0.34] | -1.45 | 0.148
epilepsy type [TLE+]     |    -0.71 | 0.83 | [-2.39, 0.90] | -0.86 | 0.390
eloquent [yes]           |     0.57 | 0.42 | [-0.25, 1.43] |  1.35 | 0.179
focality [multifocal]    |     0.46 | 0.80 | [-1.10, 2.06] |  0.58 | 0.563
focality [regional]      |     0.10 | 0.50 | [-0.90, 1.09] |  0.20 | 0.844
focality [widespread]    |    -0.34 | 1.02 | [-2.39, 1.65] | -0.34 | 0.736
pathology [gliosis|scar] |    -0.99 | 0.59 | [-2.17, 0.15] | -1.69 | 0.092
pathology [HS]           |    -0.02 | 0.70 | [-1.39, 1.36] | -0.03 | 0.980
pathology [LEAT|VM]      |     0.63 | 0.96 | [-1.19, 2.65] |  0.65 | 0.514
pathology [other MCD]    |    -0.62 | 0.55 | [-1.71, 0.45] | -1.14 | 0.254
phase12 [partial]        |    -0.07 | 0.62 | [-1.30, 1.16] | -0.11 | 0.912
phase12 [complete]       |     1.09 | 0.71 | [-0.29, 2.50] |  1.55 | 0.122
lvfa [yes]               |     0.26 | 0.45 | [-0.62, 1.15] |  0.58 | 0.559
cavity vol               |    -0.12 | 0.20 | [-0.54, 0.29] | -0.58 | 0.563
soz vol                  |    -0.29 | 0.32 | [-0.96, 0.29] | -0.90 | 0.366
overlap ratio            |     0.61 | 0.24 | [ 0.16, 1.11] |  2.54 | 0.011
nonsoz ratio             |     0.42 | 0.23 | [-0.03, 0.90] |  1.79 | 0.073
ANOVA test code
car::Anova(mod_glm, type = 2)
ANOVA test results
Analysis of Deviance Table (Type II tests)

Response: outcome_simplified
              LR Chisq Df Pr(>Chisq)   
laterality      0.3659  1   0.545252   
epilepsy_type   6.9229  6   0.328039   
eloquent        1.8583  1   0.172821   
focality        1.0404  3   0.791473   
pathology       5.8560  4   0.210164   
phase12         6.2881  2   0.043107 * 
lvfa            0.3415  1   0.558989   
cavity_vol      0.3357  1   0.562319   
soz_vol         0.8863  1   0.346480   
overlap_ratio   7.1195  1   0.007625 **
nonsoz_ratio    3.3609  1   0.066761 . 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

2 Part II: Causal effect

Below is the complete DAG (Figure 1 in the manuscript), with each node and causal edge defined using our domain knowledge.
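As a minimal sketch of how such a DAG is encoded and queried with dagitty (the edges below are illustrative only; the actual `dag_full` object encodes the full set of relationships shown in Figure 1 of the manuscript):

```r
library(dagitty)

# Toy DAG, not the manuscript's `dag_full`:
# unmeasured nodes are declared as latent.
dag_sketch <- dagitty('dag {
  Surgery [latent]
  SOZVolume -> SOZResected -> Outcome
  CavityVolume -> SOZResected
  Surgery -> CavityVolume
  Surgery -> Outcome
}')

# Query the adjustment set(s) for a direct effect:
adjustmentSets(dag_sketch, "SOZResected", "Outcome", effect = "direct")
```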

2.1 SOZ-resected fraction

SOZ and cavity volumes are the adjustment variables in the regression used to investigate the direct causal effect of the SOZ-resected fraction on the outcome.

DAG query SOZResected → Outcome direct effect
print_adjustment_sets(adjustmentSets(dag_full, "SOZResected", "Outcome", effect = "direct"))
[[1]]
[1] "CavityVolume" "SOZVolume"   

In fact, the Non-SOZ-resected fraction can be added along with the SOZ and cavity volumes in the model, considering that: 1) it is convenient, as the effects of the SOZ-resected fraction and the non-SOZ portion of the cavity can be estimated from the same model; and 2) it helps increase the precision of the effect estimates. If we denote SOZ-resected as “X”, Outcome as “Y”, combine SOZVolume and CavityVolume into one node “Z”, and denote unmeasured nodes (e.g., Surgery) as “U”, this is equivalent to model 2 in Cinelli, Forney, and Pearl (2020), suggesting that Z is a good control. Likewise, if we denote SOZ-resected as “X” and Non-SOZ-resected as “Z”, this is equivalent to model 8 in Cinelli, Forney, and Pearl (2020), suggesting that Z is a good control that can potentially improve precision. The following DAG query also shows that the non-SOZ portion of the cavity, the SOZ volume, and the cavity volume are valid adjustment variables for the effect of the SOZ-resected fraction.

DAG query SOZResected → Outcome direct effect
print(isAdjustmentSet(dag_full, c("NonSOZResected", "CavityVolume", "SOZVolume"), "SOZResected", "Outcome"))
[1] TRUE
Model code
mod_fit_causal1 <- brm(
  data = df_norm, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + cavity_vol + soz_vol + overlap_ratio + nonsoz_ratio,
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

mod_fit_causal1
Model fitting results and diagnostics
 Family: bernoulli 
  Links: mu = logit 
Formula: outcome_simplified ~ 1 + cavity_vol + soz_vol + overlap_ratio + nonsoz_ratio 
   Data: df_norm (Number of observations: 200) 
  Draws: 4 chains, each with iter = 3500; warmup = 1000; thin = 1;
         total post-warmup draws = 10000

Regression Coefficients:
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept         0.25      0.15    -0.04     0.55 1.00     9655     7160
cavity_vol       -0.07      0.14    -0.35     0.22 1.00     7262     6843
soz_vol          -0.24      0.15    -0.55     0.06 1.00     7657     6698
overlap_ratio     0.41      0.15     0.12     0.70 1.00     7182     7103
nonsoz_ratio      0.06      0.15    -0.24     0.36 1.00     6902     6889

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Marginalization and visualization code
mfx <- mod_fit_causal1 %>% 
  avg_slopes(variables = "overlap_ratio")

mfx <- posterior_draws(mfx) %>% 
  mutate(value = draw / overlap_ratio_sd * 100 * 0.1) %>% 
  select(value) 

p_overlap_posterior1 <- bayestestR::describe_posterior(mfx %>% 
                                              rename("SOZ-resected fraction" = value), 
                                            ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                                            null = 0) 

p_tempa <- marginale_continuous_plot(mfx, 
                                     0, 
                                     "\u2206Pr(good outcome) in %", 
                                     "per 10% \u2206 in SOZ-resected fraction",
                                     "a | average marginal effect")



mfx <- slopes(mod_fit_causal1, variables = 'overlap_ratio') %>% 
  posterior_draws() %>% 
  mutate(draw = draw / overlap_ratio_sd * 100 * 0.1,
         estimate = estimate / overlap_ratio_sd * 100 * 0.1) %>% 
  mutate(overlap_ratio = overlap_ratio * overlap_ratio_sd + overlap_ratio_mean,
         cavity_vol = cavity_vol * cavity_vol_sd + cavity_vol_mean,
         soz_vol = soz_vol * soz_vol_sd + soz_vol_mean) %>% 
  filter(overlap_ratio <= 0.9)

mfx_label <- mfx %>% 
  distinct(soz_vol, cavity_vol, overlap_ratio, estimate) %>% 
  mutate(id = row_number()) %>% 
  filter(id %in% c(106, 124, 131, 133)) %>% 
  #filter(estimate < 3 & overlap_ratio < 0.5) %>% 
  mutate(label = str_c("SOZ = ", format(round(soz_vol, 1), nsmall = 1), "\n", 
                       "cavity = ", format(round(cavity_vol, 1), nsmall = 1))) %>% 
  arrange(overlap_ratio)

p_tempb <- marginale_slop_plot_label(mfx, mfx_label, "overlap_ratio",
                                     "SOZ-resected fraction", 
                                     "\u2206Pr(good outcome) in %", 
                                     "b | marginal effect changes across the original data",
                                     nudge_x = c(0.05, 0.35, 0.2, 0.5),
                                     nudge_y = rep(1, nrow(mfx_label)))

mfx <- predictions(mod_fit_causal1) %>% 
  posterior_draws() %>% 
  mutate(overlap_ratio = overlap_ratio * overlap_ratio_sd + overlap_ratio_mean,
         cavity_vol = cavity_vol * cavity_vol_sd + cavity_vol_mean,
         soz_vol = soz_vol * soz_vol_sd + soz_vol_mean)

mfx_label <- mfx %>% 
  distinct(soz_vol, cavity_vol, overlap_ratio, estimate) %>% 
  filter(overlap_ratio %in% mfx_label$overlap_ratio) %>% 
  mutate(label = str_c("SOZ = ", format(round(soz_vol, 1), nsmall = 1), "\n", 
                       "cavity = ", format(round(cavity_vol, 1), nsmall = 1))) %>% 
  arrange(overlap_ratio)


p_tempc <- marginale_prediction_plot_label(mfx, mfx_label, "overlap_ratio",
                                           "SOZ-resected fraction", 
                                           "Pr(good outcome)", 
                                           "c | posterior predictive simulation with the original data",
                                           nudge_x = c(0.05, 0.3, 0.175, 0.45),
                                           nudge_y = rep(0.07, nrow(mfx_label)))


mapplot1 <- ggarrange(p_tempa$p_temp, p_tempb$p_temp, nrow = 1, ncol = 2,
                      widths = c(0.6, 1))

mapplot <- ggarrange(mapplot1, p_tempc$p_temp, nrow = 2, ncol = 1,
                     widths = c(1, 1), heights = c(1, 1.1))


p_temp_file <- tempfile(tmpdir = "./tmp", fileext = '.png')
agg_png(p_temp_file, width = 30, height = 22.5, units = "in", res = figdpi)
suppressWarnings(print(mapplot))
invisible(dev.off())

knitr::include_graphics(p_temp_file)

Summary of the distribution in panel a
Summary of Posterior Distribution

Parameter             | Median |       95% CI |     pd
------------------------------------------------------
SOZ-resected fraction |   3.71 | [1.22, 6.11] | 99.70%

2.2 Non-SOZ-resected fraction

The effect of the Non-SOZ-resected fraction on the outcome is subtle, suggesting that a higher or lower non-SOZ portion of the cavity may not increase or decrease the probability of a good surgical outcome. However, this interpretation must be stated cautiously, since it refers only to the median effect: the uncertainty of the estimated effect is large, as reflected by the wide 95% credible interval. To be precise, the overall impact of the non-SOZ portion is generally subtle, but there is notable variation among individual patients.

Marginalization and visualization code
mfx <- mod_fit_causal1 %>% 
  avg_slopes(variables = "nonsoz_ratio")

mfx <- posterior_draws(mfx) %>% 
  mutate(value = draw / nonsoz_ratio_sd * 100 * 0.1) %>% 
  select(value) 

p_nonsoz_posterior1 <- bayestestR::describe_posterior(mfx %>% 
                                                        rename("Non-SOZ-resected fraction" = value), 
                                                      ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                                                      null = 0) 

p_tempa <- marginale_continuous_plot(mfx, 
                                     0, 
                                     "\u2206Pr(good outcome) in %", 
                                     "per 10% \u2206 in Non-SOZ-resected fraction",
                                     "a | average marginal effect")


mfx <- slopes(mod_fit_causal1, variables = 'nonsoz_ratio') %>% 
  posterior_draws() %>% 
  mutate(draw = draw / nonsoz_ratio_sd * 100 * 0.1,
         estimate = estimate / nonsoz_ratio_sd * 100 * 0.1) %>% 
  mutate(nonsoz_ratio = nonsoz_ratio * nonsoz_ratio_sd + nonsoz_ratio_mean,
         cavity_vol = cavity_vol * cavity_vol_sd + cavity_vol_mean,
         soz_vol = soz_vol * soz_vol_sd + soz_vol_mean) %>% 
  filter(nonsoz_ratio <= 0.9)

mfx_label <- mfx %>% 
  distinct(soz_vol, cavity_vol, nonsoz_ratio, estimate) %>% 
  mutate(id = row_number()) %>% 
  filter(id %in% c(53, 68, 70, 74)) %>% 
  #filter(estimate < 0.5) %>% 
  mutate(label = str_c("SOZ = ", format(round(soz_vol, 1), nsmall = 1), "\n", 
                       "cavity = ", format(round(cavity_vol, 1), nsmall = 1))) %>% 
  arrange(nonsoz_ratio)

p_tempb <- marginale_slop_plot_label(mfx, mfx_label, "nonsoz_ratio",
                                     "Non-SOZ-resected fraction", 
                                     "\u2206Pr(good outcome) in %", 
                                     "b | marginal effect changes across the original data",
                                     nudge_x = c(0.4, 0.54, 0.67, 0.8),
                                     nudge_y = rep(-2, nrow(mfx_label)))

mfx <- predictions(mod_fit_causal1) %>% 
  posterior_draws() %>% 
  mutate(nonsoz_ratio = nonsoz_ratio * nonsoz_ratio_sd + nonsoz_ratio_mean,
         cavity_vol = cavity_vol * cavity_vol_sd + cavity_vol_mean,
         soz_vol = soz_vol * soz_vol_sd + soz_vol_mean)

mfx_label <- mfx %>% 
  distinct(soz_vol, cavity_vol, nonsoz_ratio, estimate) %>% 
  mutate(id = row_number()) %>% 
  filter(estimate < 0.3 | estimate > 0.81) %>% 
  mutate(label = str_c("SOZ = ", format(round(soz_vol, 1), nsmall = 1), "\n", 
                       "cavity = ", format(round(cavity_vol, 1), nsmall = 1))) %>% 
  arrange(nonsoz_ratio)


p_tempc <- marginale_prediction_plot_label(mfx, mfx_label, "nonsoz_ratio",
                                           "Non-SOZ-resected fraction", 
                                           "Pr(good outcome)", 
                                           "c | posterior predictive simulation with the original data",
                                           nudge_x = c(0.4, 0.54, 0.8, 0.65, 0.8, 0.95),
                                           nudge_y = c(0.08, 0.08, 0.08, 0.92, 0.92, 0.92))


mapplot1 <- ggarrange(p_tempa$p_temp, p_tempb$p_temp, nrow = 1, ncol = 2,
                      widths = c(0.6, 1))

mapplot <- ggarrange(mapplot1, p_tempc$p_temp, nrow = 2, ncol = 1,
                     widths = c(1, 1), heights = c(1, 1.1))


p_temp_file <- tempfile(tmpdir = "./tmp", fileext = '.png')
agg_png(p_temp_file, width = 30, height = 22.5, units = "in", res = figdpi)
suppressWarnings(print(mapplot))
invisible(dev.off())

knitr::include_graphics(p_temp_file)

Summary of the distribution in panel a
Summary of Posterior Distribution

Parameter                 | Median |        95% CI |     pd
-----------------------------------------------------------
Non-SOZ-resected fraction |   0.75 | [-2.87, 4.36] | 65.54%

2.3 SOZ volume

The total effects of eloquence, epilepsy type, focality, phase I-II, LVFA, and SOZ volume on the outcome can be estimated using a single model, as shown in the DAG queries below: all of these variables happen to be adjustment variables for each particular effect of interest. While the DAG identifies the necessary adjustment variables, the specific model structure also depends on model complexity and the available sample size. The rationale behind our model selection for the main results section is explained in detail in Section 3.

DAG query SOZVolume → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "SOZVolume", "Outcome", effect = "total"))
[[1]]
[1] "Eloquent"       "EpilepsyType"   "Focality"       "Histopathology"
[5] "LVFA"           "Laterality"     "Phase12"       
DAG query Eloquent → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "Eloquent", "Outcome", effect = "total"))
[[1]]
[1] "EpilepsyType"   "Focality"       "Histopathology" "LVFA"          
[5] "Laterality"     "Phase12"        "SOZVolume"     
DAG query EpilepsyType → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "EpilepsyType", "Outcome", effect = "total"))
[[1]]
[1] "Eloquent"       "Focality"       "Histopathology" "LVFA"          
[5] "Laterality"     "Phase12"        "SOZVolume"     
DAG query Focality → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "Focality", "Outcome", effect = "total"))
[[1]]
[1] "Eloquent"       "EpilepsyType"   "Histopathology" "LVFA"          
[5] "Laterality"     "Phase12"        "SOZVolume"     
DAG query Phase I-II → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "Phase12", "Outcome", effect = "total"))
[[1]]
[1] "Eloquent"       "EpilepsyType"   "Focality"       "Histopathology"
[5] "LVFA"           "Laterality"     "SOZVolume"     
DAG query LVFA → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "LVFA", "Outcome", effect = "total"))
[[1]]
[1] "Eloquent"       "EpilepsyType"   "Focality"       "Histopathology"
[5] "Laterality"     "Phase12"        "SOZVolume"     
Model code
mod4rand3fix <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    eloquent + laterality + lvfa + 
    soz_vol : eloquent +  soz_vol : laterality + soz_vol : lvfa + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) + 
    (1 + soz_vol | phase12),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

mod4rand3fix
Model fitting results and diagnostics
 Family: bernoulli 
  Links: mu = logit 
Formula: outcome_simplified ~ 1 + soz_vol + eloquent + laterality + lvfa + soz_vol:eloquent + soz_vol:laterality + soz_vol:lvfa + (1 + soz_vol | epilepsy_type) + (1 + soz_vol | focality) + (1 + soz_vol | pathology) + (1 + soz_vol | phase12) 
   Data: df_fit (Number of observations: 163) 
  Draws: 4 chains, each with iter = 3500; warmup = 1000; thin = 1;
         total post-warmup draws = 10000

Multilevel Hyperparameters:
~epilepsy_type (Number of levels: 7) 
                       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)              0.24      0.17     0.01     0.64 1.00     6412
sd(soz_vol)                0.24      0.17     0.01     0.64 1.00     6931
cor(Intercept,soz_vol)    -0.04      0.44    -0.81     0.78 1.00    17488
                       Tail_ESS
sd(Intercept)              5854
sd(soz_vol)                4960
cor(Intercept,soz_vol)     7782

~focality (Number of levels: 4) 
                       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)              0.18      0.14     0.01     0.53 1.00     9353
sd(soz_vol)                0.25      0.18     0.01     0.67 1.00     6970
cor(Intercept,soz_vol)     0.01      0.45    -0.79     0.81 1.00    14480
                       Tail_ESS
sd(Intercept)              6251
sd(soz_vol)                4963
cor(Intercept,soz_vol)     7774

~pathology (Number of levels: 5) 
                       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)              0.25      0.17     0.01     0.63 1.00     5675
sd(soz_vol)                0.21      0.16     0.01     0.59 1.00     7848
cor(Intercept,soz_vol)    -0.06      0.44    -0.83     0.78 1.00    17625
                       Tail_ESS
sd(Intercept)              5139
sd(soz_vol)                4914
cor(Intercept,soz_vol)     7663

~phase12 (Number of levels: 3) 
                       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)              0.32      0.18     0.02     0.72 1.00     4737
sd(soz_vol)                0.21      0.17     0.01     0.62 1.00     8818
cor(Intercept,soz_vol)     0.01      0.45    -0.81     0.81 1.00    17147
                       Tail_ESS
sd(Intercept)              4250
sd(soz_vol)                5452
cor(Intercept,soz_vol)     7578

Regression Coefficients:
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
Intercept                        0.07      0.43    -0.77     0.95 1.00    10852
soz_vol                         -0.14      0.26    -0.63     0.36 1.00    16570
eloquentyes                      0.06      0.23    -0.40     0.52 1.00    22712
lateralityunilateral             0.12      0.23    -0.33     0.58 1.00    23068
lvfayes                          0.20      0.24    -0.28     0.67 1.00    18665
soz_vol:eloquentyes             -0.08      0.24    -0.56     0.38 1.00    18552
soz_vol:lateralityunilateral    -0.20      0.24    -0.67     0.26 1.00    16068
soz_vol:lvfayes                 -0.08      0.25    -0.56     0.40 1.00    16648
                             Tail_ESS
Intercept                        8206
soz_vol                          7613
eloquentyes                      6931
lateralityunilateral             7404
lvfayes                          7622
soz_vol:eloquentyes              8077
soz_vol:lateralityunilateral     7645
soz_vol:lvfayes                  7589

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

Effect of SOZ volume on the surgery outcome.

Marginalization and visualization code
mfx <- mod4rand3fix %>% 
  avg_slopes(variables = "soz_vol")

mfx <- posterior_draws(mfx) %>% 
  mutate(value = draw / soz_vol_sd * 100) %>% 
  select(value) 

pp_table1 <- bayestestR::describe_posterior(mfx %>% 
                                 rename("SOZ volume" = value), 
                                         ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                                         null = 0) 

p_tempa <- marginale_continuous_plot(mfx, 
                                     0, 
                                     "\u2206Pr(good outcome) in %", 
                                     "per unit \u2206 in SOZ volume",
                                     "a | average marginal effect")


mfx <- slopes(mod4rand3fix, variables = 'soz_vol') %>% 
  posterior_draws() %>% 
  mutate(draw = draw / soz_vol_sd * 100,
         estimate = estimate / soz_vol_sd * 100) %>% 
  mutate(soz_vol = soz_vol * soz_vol_sd + soz_vol_mean) 


p_tempb <- marginale_slop_plot(mfx, "soz_vol",
                               parse(text = "SOZ~volume~(cm^3)"), 
                               "\u2206Pr(good outcome) in %", 
                               "b | marginal effect changes across the original data")

mfx <- predictions(mod4rand3fix, by = 'soz_vol') %>% 
  posterior_draws() %>% 
  mutate(soz_vol = soz_vol * soz_vol_sd + soz_vol_mean)



p_tempc <- marginale_prediction_plot(mfx, "soz_vol",
                                     parse(text = "SOZ~volume~(cm^3)"), 
                                     "Pr(good outcome)", 
                                     "c | posterior predictive simulation with the original data")


mapplot1 <- ggarrange(p_tempa$p_temp, p_tempb$p_temp, nrow = 1, ncol = 2,
                      widths = c(0.6, 1))

mapplot <- ggarrange(mapplot1, p_tempc$p_temp, nrow = 2, ncol = 1,
                     widths = c(1, 1), heights = c(1, 1.1))


p_temp_file <- tempfile(tmpdir = "./tmp", fileext = '.png')
agg_png(p_temp_file, width = 30, height = 22.5, units = "in", res = figdpi)
suppressWarnings(print(mapplot))
invisible(dev.off())

knitr::include_graphics(p_temp_file)

Summary of the distribution in panel a
Summary of Posterior Distribution

Parameter  | Median |         95% CI |     pd
---------------------------------------------
SOZ volume |  -0.78 | [-1.58,  0.00] | 97.32%

2.4 Cavity volume

With the current DAG, it is not possible to estimate the total effect of cavity size through adjustment. There is no way to block the backdoor path CavityVolume ← Surgery → MissedSOZ → Outcome when all the intermediate nodes on the path are unmeasured. This limitation is reasonable, since we lack information about other surgery-related factors that could alter the cavity size while also influencing the outcome. Without measuring these additional sources, the potential confounding along this path cannot be eliminated.

DAG query CavityVolume → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "CavityVolume", "Outcome", effect = "total"))
[1] "No Way to Block Backdoor Paths"

2.5 Categorical variables

As shown in the DAG query results, the parameters of the above SOZ volume model can also be used to interpret the effects of epilepsy type, focality, eloquent, phase I-II, and LVFA.

No adjustment variable is required for the total effect of laterality on outcome.

DAG query Laterality → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "Laterality", "Outcome", effect = "total"))
[[1]]
[1] "Backdoor Paths Unconditionally Closed"
Model code
mod_fit_overall <- brm(
  data = df_norm, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + laterality,
  prior = c(prior(normal(0, 1.5), class = Intercept)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

mod_fit_overall
Model fitting results and diagnostics
 Family: bernoulli 
  Links: mu = logit 
Formula: outcome_simplified ~ 1 + laterality 
   Data: df_norm (Number of observations: 200) 
  Draws: 4 chains, each with iter = 3500; warmup = 1000; thin = 1;
         total post-warmup draws = 10000

Regression Coefficients:
                     Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
Intercept                0.08      0.23    -0.37     0.53 1.00     5281
lateralityunilateral     0.27      0.29    -0.31     0.86 1.00     5463
                     Tail_ESS
Intercept                4579
lateralityunilateral     4420

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Marginalization and visualization code
mfx_all <- avg_predictions(mod_fit_overall)
mfx <- avg_predictions(mod_fit_overall, by = "laterality")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(laterality, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = laterality,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid)

p_temp <- marginale_plot(mfx, title1 = "SEEG laterality",
                         ref = median(mfx_all$overall))

knitr::include_graphics(p_temp$p_temp_file)

Marginalization and visualization code
plot_laterality <- p_temp$p_temp_file
plot_laterality_p <- marginale_plot(mfx %>% select(-overall), title1 = "SEEG laterality",
                         ref = median(mfx_all$overall))$p_temp

pp_table_laterality <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

No adjustment variable is required for the total effect of pathology on outcome.

DAG query Histopathology → Outcome total effect
print_adjustment_sets(adjustmentSets(dag_full, "Histopathology", "Outcome", effect = "total"))
[[1]]
[1] "Backdoor Paths Unconditionally Closed"
Model code
mod_fit_overall <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + (1 | pathology),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = sd)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

mod_fit_overall
Model fitting results and diagnostics
 Family: bernoulli 
  Links: mu = logit 
Formula: outcome_simplified ~ 1 + (1 | pathology) 
   Data: df_fit (Number of observations: 163) 
  Draws: 4 chains, each with iter = 3500; warmup = 1000; thin = 1;
         total post-warmup draws = 10000

Multilevel Hyperparameters:
~pathology (Number of levels: 5) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.26      0.16     0.01     0.63 1.00     3588     3331

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.28      0.22    -0.15     0.70 1.00     5459     5458

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Marginalization and visualization code
mfx_all <- avg_predictions(mod_fit_overall)
mfx <- avg_predictions(mod_fit_overall, by = "pathology")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(pathology, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = pathology,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid)

p_pathology <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "histopathology",
                         ref = median(mfx_all$overall))

knitr::include_graphics(p_temp$p_temp_file)

Marginalization and visualization code
plot_pathology <- p_temp$p_temp_file
plot_pathology_p <- marginale_plot(mfx %>% select(-overall), title1 = "histopathology",
                         ref = median(mfx_all$overall))$p_temp
Marginalization and visualization code (epilepsy type)
mfx_all <- avg_predictions(mod4rand3fix)
mfx <- avg_predictions(mod4rand3fix, by = "epilepsy_type")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(epilepsy_type, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = epilepsy_type,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid)

pp_table1 <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "epilepsy type",
                         ref = median(mfx_all$overall))

plot_epilepsy_type_p <- marginale_plot(mfx %>% select(-overall), title1 = "epilepsy type",
                         ref = median(mfx_all$overall))$p_temp
Marginalization and visualization code (focality)
mfx_all <- avg_predictions(mod4rand3fix)
mfx <- avg_predictions(mod4rand3fix, by = "focality")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(focality, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = focality,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid)

pp_table2 <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "focality", 
                         ref = median(mfx_all$overall))


plot_focality_p <- marginale_plot(mfx %>% select(-overall), title1 = "focality", 
                         ref = median(mfx_all$overall))$p_temp
Marginalization and visualization code (eloquent)
mfx_all <- avg_predictions(mod4rand3fix)
mfx <- avg_predictions(mod4rand3fix, by = "eloquent")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(eloquent, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = eloquent,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid)

pp_table3 <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "eloquent",
                         ref = median(mfx_all$overall))


plot_eloquent_p <- marginale_plot(mfx %>% select(-overall), title1 = "eloquent",
                         ref = median(mfx_all$overall))$p_temp
Marginalization and visualization code (phase12)
mfx_all <- avg_predictions(mod4rand3fix)
mfx <- avg_predictions(mod4rand3fix, by = "phase12")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(phase12, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = phase12,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid) %>% 
  select(overall, discordant, partial, complete)

pp_table4 <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                               null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "Phase I-II", 
                         ref = median(mfx_all$overall))

plot_phase12_p <- marginale_plot(mfx %>% select(-overall), title1 = "Phase I-II", 
                         ref = median(mfx_all$overall))$p_temp
Marginalization and visualization code (LVFA)
mfx_all <- avg_predictions(mod4rand3fix)
mfx <- avg_predictions(mod4rand3fix, by = "lvfa")

mfx_all <- posterior_draws(mfx_all) %>% 
  select(draw, drawid) %>% 
  rename(overall = draw)
mfx <- posterior_draws(mfx) %>% 
  select(lvfa, draw, drawid) %>% 
  pivot_wider(values_from = draw,
              names_from = lvfa,
              id_cols = drawid) %>% 
  left_join(mfx_all, by = "drawid") %>% 
  select(-drawid) 

pp_table5 <- bayestestR::describe_posterior(mfx, ci_method = "hdi", ci = 0.95, test = c("p_direction"), 
                                            null = median(mfx_all$overall)) 

p_temp <- marginale_plot(mfx, title1 = "LVFA", 
                         ref = median(mfx_all$overall))

plot_lvfa_p <- marginale_plot(mfx %>% select(-overall), title1 = "LVFA", 
                                 ref = median(mfx_all$overall))$p_temp

Summary of the distribution (epilepsy type)
Summary of Posterior Distribution

Parameter | Median |       95% CI |     pd
------------------------------------------
FLE       |   0.61 | [0.50, 0.72] | 74.94%
FLE+      |   0.47 | [0.34, 0.62] | 90.60%
OIE       |   0.62 | [0.46, 0.76] | 72.75%
PE        |   0.69 | [0.55, 0.84] | 95.02%
PE+       |   0.46 | [0.22, 0.69] | 81.57%
TLE       |   0.58 | [0.48, 0.67] | 57.47%
TLE+      |   0.46 | [0.30, 0.61] | 93.85%
overall   |   0.57 | [0.50, 0.64] | 50.00%
Summary of the distribution (focality)
Summary of Posterior Distribution

Parameter  | Median |       95% CI |     pd
-------------------------------------------
focal      |   0.61 | [0.50, 0.71] | 76.43%
multifocal |   0.50 | [0.36, 0.62] | 87.36%
regional   |   0.62 | [0.54, 0.71] | 87.36%
widespread |   0.33 | [0.18, 0.48] | 99.70%
overall    |   0.57 | [0.50, 0.64] | 50.00%
Summary of the distribution (laterality)
Summary of Posterior Distribution

Parameter  | Median |       95% CI |     pd
-------------------------------------------
bilateral  |   0.52 | [0.41, 0.63] | 76.96%
unilateral |   0.59 | [0.50, 0.67] | 71.11%
overall    |   0.56 | [0.49, 0.63] | 50.00%
Summary of the distribution (eloquent)
Summary of Posterior Distribution

Parameter | Median |       95% CI |     pd
------------------------------------------
no        |   0.59 | [0.51, 0.67] | 67.92%
yes       |   0.54 | [0.45, 0.63] | 73.18%
overall   |   0.57 | [0.50, 0.64] | 50.00%
Summary of the distribution (pathology)
Summary of Posterior Distribution

Parameter    | Median |       95% CI |     pd
---------------------------------------------
FCD II       |   0.61 | [0.51, 0.73] | 78.41%
gliosis|scar |   0.53 | [0.38, 0.64] | 75.33%
HS           |   0.57 | [0.45, 0.70] | 51.78%
LEAT|VM      |   0.58 | [0.44, 0.74] | 58.90%
other MCD    |   0.54 | [0.43, 0.65] | 67.54%
overall      |   0.57 | [0.50, 0.65] | 50.00%
Summary of the distribution (phase I-II)
Summary of Posterior Distribution

Parameter  | Median |       95% CI |     pd
-------------------------------------------
overall    |   0.57 | [0.50, 0.64] | 50.00%
discordant |   0.59 | [0.43, 0.72] | 61.94%
partial    |   0.51 | [0.42, 0.61] | 89.91%
complete   |   0.66 | [0.55, 0.78] | 92.95%
Summary of the distribution (LVFA)
Summary of Posterior Distribution

Parameter | Median |       95% CI |     pd
------------------------------------------
no        |   0.53 | [0.42, 0.64] | 78.08%
yes       |   0.59 | [0.51, 0.66] | 65.86%
overall   |   0.57 | [0.50, 0.64] | 50.00%

3 Part III: Prior and model selections

As mentioned previously, each categorical variable is modeled through the intercept terms of the logistic regression. The model fitting results and diagnostics (Gabry et al. 2019) are shown below, together with the estimated posterior distribution of every coefficient (e.g., posterior mean and its standard error). We use epilepsy type as an example; the model code is shown below.

Model code
mod_fit_overall <- brm(
  data = df_norm, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + epilepsy_type,
  prior = c(prior(normal(0, 1.5), class = b)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

Note that we chose normal(0, 1.5) as the prior for the intercept coefficients (unfold the above code to see it). Is it not too narrow? In other words, one might object that this prior could “bias” the model because it is subjectively narrow. Let us explore this prior to see whether it really is “subjective”. We start with a flat prior such as normal(0, 500), which is very wide and might be considered “objective”, since it says we know almost nothing about this parameter: it could take any value from this wide distribution, centered at 0. We then use this prior and examine the implied probability of achieving a good surgical outcome. This technique in the Bayesian workflow is called prior predictive simulation/checking (Gelman et al. 2020); it validates the prior choice through simulations without any real data. It also highlights another advantage of the Bayesian approach over conventional methods: Bayesian inference encourages us to explicitly state, consider, and describe our assumptions, which are then encapsulated in the model equations and priors.

To conduct a prior predictive simulation, we compute the output of the model (good or bad surgical outcome) given one value of the intercept drawn at random from the “objective” prior, and repeat this process many times (i.e., 16,000) to obtain the probability of a good surgical outcome from all outputs of the logistic regression. Since no data are used and the prior is “objective”, we would expect the distribution of this probability to be centered around 0.5 with no obvious preference for any particular value. Surprisingly, as shown below, this “objective” prior pushes the good-outcome probability toward either 0 or 1, and the probability mass near 0.5 is very low. This so-called “objective” prior is not objective at all.

What about the prior we selected in the model, normal(0, 1.5)? Does it produce fair predictions without any data? As shown below, this prior yields a much more reasonable outcome-probability distribution than the supposedly “objective” prior above: it does not favor any particular probability value between 0 and 1. Keep in mind that this is without any real data; a fair model should have no preference before seeing the data if objectivity is the main consideration. When fitting the model with data, we are in fact balancing the prior against the data; if the prior is bad and the sample size is small (less informative), wrong inferences can be drawn. This is one reason why Bayesian methods are effective for small sample sizes: we validate the prior rather than assuming it is inherently fair. Note that this does not mean conventional logistic regression is wrong. It is correct when we have infinite data (i.e., infinite information), or asymptotically so, since the prior no longer matters in that situation. Finally, normal(0, 1.5) is referred to as a weakly informative prior, as it is neither flat nor informative (e.g., set close to the expected value).
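The shape of the two prior predictive distributions discussed above can be illustrated with a minimal base-R sketch, independent of brms (the 16,000 draws match the count mentioned in the text): draw intercepts from each candidate prior and map them through the inverse-logit link.

```r
set.seed(19850913)
n_draws <- 16000

# Implied probabilities of a good outcome under each intercept prior.
p_flat <- plogis(rnorm(n_draws, mean = 0, sd = 500))  # "objective" flat prior
p_weak <- plogis(rnorm(n_draws, mean = 0, sd = 1.5))  # weakly informative prior

# Under the flat prior, nearly all mass piles up at the extremes ...
mean(p_flat < 0.01 | p_flat > 0.99)
# ... whereas the weakly informative prior spreads mass across (0, 1).
mean(p_weak < 0.01 | p_weak > 0.99)
```

An intercept with standard deviation 500 almost always has absolute value far above logit(0.99) ≈ 4.6, which is why the implied probabilities collapse to 0 or 1.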

3.1 The SOZ-resected fraction model

Prior predictive simulation code
mod_fit_causal_flatprior <- brm(
  data = df_norm,
  family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + cavity_vol + soz_vol + overlap_ratio,
  prior = c(prior(normal(0, 500), class = Intercept),
            prior(normal(0, 500), class = b)),  
  iter = 5000,
  warmup = 1000,
  chains = 4,
  cores = 4,
  seed = 19850913,
  sample_prior = "only"
)

Our prior, chosen based on prior predictive checks (normal(0, 0.5) for the slope coefficients here; unfold the code below), yields a good-outcome probability distribution centered around 0.5 that decays gradually toward extremely high or low probability values.

Prior predictive simulation code
mod_fit_causal_prior <- brm(
  data = df_norm,
  family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + cavity_vol + soz_vol + overlap_ratio,
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.5), class = b)),  
  iter = 5000,
  warmup = 1000,
  chains = 4,
  cores = 4,
  seed = 19850913,
  sample_prior = "only"
)

3.2 The SOZ volume model

In the SOZ volume model, the prior predictive distribution under flat priors again concentrates at extreme probability values (high or low) compared to the weakly informative priors used in our analysis.

Flat prior

Prior predictive simulation code
mod4rand3fix <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    eloquent + laterality + lvfa + 
    soz_vol : eloquent +  soz_vol : laterality + soz_vol : lvfa + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) + 
    (1 + soz_vol | phase12),
  prior = c(prior(normal(0, 500), class = Intercept),
            prior(normal(0, 500), class = b),
            prior(normal(0, 500), class = sd),
            prior(lkj(1), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 5000, warmup = 1000, chains = 4, cores = 4, sample_prior = "only", seed = 19850913) 

Weakly informative prior

Prior predictive simulation code
mod4rand3fix <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    eloquent + laterality + lvfa + 
    soz_vol : eloquent +  soz_vol : laterality + soz_vol : lvfa + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) + 
    (1 + soz_vol | phase12),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 5000, warmup = 1000, chains = 4, cores = 4, sample_prior = "only", seed = 19850913) 

3.3 Model comparisons

There were many potential approaches to building a model from SOZ volume, epilepsy type, focality, eloquent, phase I-II, LVFA, and pathology. We could have included complex interactions between the categorical variables, but chose to prioritize reliable estimation given our data size. Our main-results model therefore focuses on interactions between SOZ volume and the other factors. Additionally, we apply a hierarchical structure only to factors with more than two levels (epilepsy type, focality, phase I-II, and pathology), as two levels are insufficient for hierarchical modeling. These decisions reflect a balance between model complexity and accuracy given the available data.

We rigorously justify the model selection by building six alternative models with varying interaction structures (none, only the 7-way interaction, or all possible 1- to 7-way interactions) and hierarchical structures (complete pooling or partial pooling). To assess generalizability, we employed Bayesian leave-one-out cross-validation (LOO-CV; see Vehtari, Gelman, and Gabry 2017) with Pareto-smoothed importance sampling (PSIS). The PSIS diagnostics also flag model misspecification and erroneous data points (Gabry et al. 2019). Following the criteria suggested in Sivula et al. (2020), we compared models based on their expected log pointwise predictive density (ELPD). Differences in ELPD smaller than 4 indicate that the models do not differ statistically in terms of generalizability. The more complicated models may suffer from misspecification, as indicated by the PSIS diagnostics (e.g., warning messages showing pareto_k > 0.7). Bayes R² is also used to assess how well each model fits the data.

Model code
mod7factor <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    (1 + soz_vol | laterality) + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | eloquent) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) + 
    (1 + soz_vol | phase12) + 
    (1 + soz_vol | lvfa),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)


mod7wayinteractions <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    (1 + soz_vol | interaction),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)


mod7factor7wayinteractions <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    (1 + soz_vol | laterality) + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | eloquent) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) +
    (1 + soz_vol | phase12) +
    (1 + soz_vol | lvfa) +
    (1 + soz_vol | interaction),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)


modmonster <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    (1 + soz_vol | laterality * epilepsy_type * eloquent * focality * pathology * phase12 * lvfa) ,
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)


mod4rand3fix <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + 
    eloquent + laterality + lvfa + 
    soz_vol : eloquent +  soz_vol : laterality + soz_vol : lvfa + 
    (1 + soz_vol | epilepsy_type) +
    (1 + soz_vol | focality) +
    (1 + soz_vol | pathology) + 
    (1 + soz_vol | phase12),
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b),
            prior(normal(0, 0.3), class = sd),
            prior(lkj(2), class = cor)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)


modallfix <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + laterality + epilepsy_type + 
    eloquent + focality + pathology + phase12 + lvfa +
    soz_vol : laterality +  
    soz_vol : epilepsy_type + 
    soz_vol : eloquent +
    soz_vol : focality +
    soz_vol : pathology + 
    soz_vol : phase12 + 
    soz_vol : lvfa,
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)

modallfixnointeraction <- brm(
  data = df_fit, family = bernoulli(link = "logit"),
  outcome_simplified ~ 1 + soz_vol + laterality + epilepsy_type + 
    eloquent + focality + pathology + phase12 + lvfa,
  prior = c(prior(normal(0, 1.5), class = Intercept),
            prior(normal(0, 0.3), class = b)),
  control = list(adapt_delta = 0.99),  
  iter = 3500, warmup = 1000, chains = 4, cores = 4, seed = 19850913)
LOO-CV and diagnostic code
mod7factor <- add_criterion(mod7factor, "loo")
mod7factor <- add_criterion(mod7factor, "bayes_R2")
LOO-CV and diagnostic code
mod7wayinteractions <- add_criterion(mod7wayinteractions, "loo")
Warning: Found 1 observations with a pareto_k > 0.7 in model
'mod7wayinteractions'. We recommend to set 'moment_match = TRUE' in order to
perform moment matching for problematic observations.
LOO-CV and diagnostic code
mod7wayinteractions <- add_criterion(mod7wayinteractions, "bayes_R2")
LOO-CV and diagnostic code
mod7factor7wayinteractions <- add_criterion(mod7factor7wayinteractions, "loo")
Warning: Found 1 observations with a pareto_k > 0.7 in model
'mod7factor7wayinteractions'. We recommend to set 'moment_match = TRUE' in
order to perform moment matching for problematic observations.
LOO-CV and diagnostic code
mod7factor7wayinteractions <- add_criterion(mod7factor7wayinteractions, "bayes_R2")
LOO-CV and diagnostic code
modmonster <- add_criterion(modmonster, "loo")
Warning: Found 49 observations with a pareto_k > 0.7 in model 'modmonster'. We
recommend to set 'moment_match = TRUE' in order to perform moment matching for
problematic observations.
LOO-CV and diagnostic code
modmonster <- add_criterion(modmonster, "bayes_R2")
LOO-CV and diagnostic code
mod4rand3fix <- add_criterion(mod4rand3fix, "loo")
mod4rand3fix <- add_criterion(mod4rand3fix, "bayes_R2")
LOO-CV and diagnostic code
modallfix <- add_criterion(modallfix, "loo")
modallfix <- add_criterion(modallfix, "bayes_R2")
LOO-CV and diagnostic code
modallfixnointeraction <- add_criterion(modallfixnointeraction, "loo")
modallfixnointeraction <- add_criterion(modallfixnointeraction, "bayes_R2")

According to the warning messages above, the more complicated models exhibit pathological PSIS diagnostics, indicating either model misspecification or erroneous data points; these models are therefore excluded from the model selection.

As shown below, no model stands out statistically in terms of predictability/generalizability (no ELPD difference larger than 4). Note that the table below uses the first model as the reference and reports the ELPD difference for the other models. Models are ranked by predictive performance from best to worst.

Model comparison results
                       elpd_diff se_diff
modallfixnointeraction  0.0       0.0   
modallfix              -0.4       0.9   
mod4rand3fix           -1.2       1.1   
mod7factor             -1.5       1.0   
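A table like the one above can be produced with brms::loo_compare(); this sketch assumes the "loo" criterion was already added to each retained model with add_criterion() as in the chunks above.

```r
# Compare the four models that passed the PSIS diagnostics.
loo_compare(modallfixnointeraction, modallfix, mod4rand3fix, mod7factor,
            criterion = "loo")
# elpd_diff is reported relative to the best model (first row); following
# Sivula et al. (2020), differences smaller than 4 are not meaningful.
```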

As no model shows significantly better generalizability, we opt for the one with the highest Bayes R², the mod4rand3fix model, indicating the best fit to the data (see the table below). Its hierarchical structure is also the more reasonable choice, given the sample size at each level and the total number of factors in the model.

Model comparison results
                       bayes_R2 bayes_R2_se
mod4rand3fix              0.114       0.031
mod7factor                0.113       0.032
modallfix                 0.109       0.024
modallfixnointeraction    0.083       0.024
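The Bayes R² summaries above can be assembled from the posterior draws stored by add_criterion(); this sketch assumes each fitted model carries a bayes_R2 draw matrix under $criteria, where brms stores added criteria.

```r
# Collect the models that passed the PSIS diagnostics.
mods <- list(mod4rand3fix = mod4rand3fix, mod7factor = mod7factor,
             modallfix = modallfix, modallfixnointeraction = modallfixnointeraction)

# Summarize the Bayes R^2 draws into point estimates and standard errors.
r2 <- t(sapply(mods, function(m) {
  draws <- as.vector(m$criteria$bayes_R2)
  c(bayes_R2 = mean(draws), bayes_R2_se = sd(draws))
}))

# Rank models by fit, from best to worst.
round(r2[order(r2[, "bayes_R2"], decreasing = TRUE), ], 3)
```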

References

Cinelli, Carlos, Andrew Forney, and Judea Pearl. 2020. “A Crash Course in Good and Bad Controls.” SSRN Electronic Journal, 1–27. https://doi.org/10.2139/ssrn.3689437.
Gabry, Jonah, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. 2019. “Visualization in Bayesian Workflow.” Journal of the Royal Statistical Society. Series A: Statistics in Society 182 (2): 389–402. https://doi.org/10.1111/rssa.12378.
Gelman, Andrew, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. 2020. “Bayesian Workflow.” arXiv:2011.01808.
Sivula, Tuomas, Måns Magnusson, Asael Alonzo Matamoros, and Aki Vehtari. 2020. “Uncertainty in Bayesian Leave-One-Out Cross-Validation Based Model Comparison.” arXiv, August. https://www.semanticscholar.org/paper/Uncertainty-in-Bayesian-Leave-One-Out-Based-Model-Sivula-Magnusson/3a63d47d302f7e704724695ef47b183793dcac65.
Vehtari, Aki, Andrew Gelman, and Jonah Gabry. 2017. “Practical Bayesian Model Evaluation Using Leave-One-Out Cross-Validation and WAIC.” Statistics and Computing 27 (5): 1413–32. https://doi.org/10.1007/s11222-016-9696-4.

License

The work is copyrighted by the ANPHY Lab and licensed under the BSD 3-clause license.

Copyright (c), 2024, The ANPHY Lab.

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Code created by Zhengchen Cai, Email: zhengchen.cai@mcgill.ca

Computing environment

sessionInfo()
R version 4.4.1 (2024-06-14 ucrt)
Platform: x86_64-w64-mingw32/x64
Running under: Windows 11 x64 (build 22631)

Matrix products: default


locale:
[1] LC_COLLATE=English_United States.utf8 
[2] LC_CTYPE=English_United States.utf8   
[3] LC_MONETARY=English_United States.utf8
[4] LC_NUMERIC=C                          
[5] LC_TIME=English_United States.utf8    

time zone: America/Toronto
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] brainspriteR_1.0.0.9000 ggrepel_0.9.5           marginaleffects_0.21.0 
 [4] ggsci_3.2.0             ggforce_0.4.2           moonBook_0.3.1         
 [7] kableExtra_1.4.0        ggdag_0.2.12            dagitty_0.3-4          
[10] DT_0.33                 webr_0.1.5              tidybayes_3.0.6        
[13] brms_2.21.0             Rcpp_1.0.12             ggstatsplot_0.12.3     
[16] gt_0.10.1               readxl_1.4.3            ragg_1.3.2             
[19] scales_1.3.0            ggpubr_0.6.0            lubridate_1.9.3        
[22] forcats_1.0.0           stringr_1.5.1           dplyr_1.1.4            
[25] purrr_1.0.2             readr_2.1.5             tidyr_1.3.1            
[28] tibble_3.2.1            ggplot2_3.5.1           tidyverse_2.0.0        

loaded via a namespace (and not attached):
  [1] svUnit_1.0.6            later_1.3.2             cellranger_1.1.0       
  [4] polyclip_1.10-6         datawizard_0.11.0.5     lifecycle_1.0.4        
  [7] rstatix_0.7.2           StanHeaders_2.32.9      vroom_1.6.5            
 [10] lattice_0.22-6          MASS_7.3-60.2           insight_0.20.1.10      
 [13] ggdist_3.3.2            backports_1.5.0         magrittr_2.0.3         
 [16] vcd_1.4-12              openxlsx_4.2.5.2        sass_0.4.9             
 [19] rmarkdown_2.27          jquerylib_0.1.4         yaml_2.3.8             
 [22] collapse_2.0.14         httpuv_1.6.15           zip_2.3.1              
 [25] askpass_1.2.0           pkgbuild_1.4.4          cowplot_1.1.3          
 [28] RColorBrewer_1.1-3      abind_1.4-5             ggraph_2.2.1           
 [31] tensorA_0.36.2.1        tweenr_2.0.3            gdtools_0.3.7          
 [34] inline_0.3.19           correlation_0.8.5       crul_1.4.2             
 [37] bridgesampling_1.1-2    svglite_2.1.3           codetools_0.2-20       
 [40] xml2_1.3.6              SuppDists_1.1-9.7       tidyselect_1.2.1       
 [43] bayesplot_1.11.1        httpcode_0.3.0          farver_2.1.2           
 [46] viridis_0.6.5           gmp_0.7-4               effectsize_0.8.8.2     
 [49] shinyWidgets_0.8.6      matrixStats_1.3.0       stats4_4.4.1           
 [52] rrtable_0.3.0           jsonlite_1.8.8          tidygraph_1.3.1        
 [55] emmeans_1.10.3          systemfonts_1.1.0       BWStest_0.2.3          
 [58] tools_4.4.1             PMCMRplus_1.9.10        rio_1.1.1              
 [61] glue_1.7.0              mnormt_2.1.1            gridExtra_2.3          
 [64] xfun_0.45               kSamples_1.2-10         distributional_0.4.0   
 [67] loo_2.7.0               withr_3.0.0             fastmap_1.2.0          
 [70] boot_1.3-30             fansi_1.0.6             openssl_2.2.0          
 [73] digest_0.6.36           timechange_0.3.0        R6_2.5.1               
 [76] mime_0.12               estimability_1.5.1      textshaping_0.4.0      
 [79] colorspace_2.1-0        utf8_1.2.4              generics_0.1.3         
 [82] fontLiberation_0.1.0    data.table_1.15.4       graphlayouts_1.1.1     
 [85] htmlwidgets_1.6.4       parameters_0.22.0.1     ztable_0.2.3           
 [88] pkgconfig_2.0.3         gtable_0.3.5            Rmpfr_0.9-5            
 [91] statsExpressions_1.5.4  lmtest_0.9-40           htmltools_0.5.8.1      
 [94] fontBitstreamVera_0.1.1 carData_3.0-5           rvg_0.3.3              
 [97] multcompView_0.1-10     editData_0.1.8          posterior_1.5.0        
[100] knitr_1.47              rstudioapi_0.16.0       reshape2_1.4.4         
[103] tzdb_0.4.0              uuid_1.2-0              coda_0.19-4.1          
[106] checkmate_2.3.1         nlme_3.1-164            curl_5.2.1             
[109] zoo_1.8-12              cachem_1.1.0            flextable_0.9.6        
[112] sjlabelled_1.2.0        parallel_4.4.1          miniUI_0.1.1.1         
[115] pillar_1.9.0            grid_4.4.1              vctrs_0.6.5            
[118] promises_1.3.0          car_3.1-2               arrayhelpers_1.1-0     
[121] xtable_1.8-4            paletteer_1.6.0         evaluate_0.24.0        
[124] zeallot_0.1.0           mvtnorm_1.2-5           cli_3.6.3              
[127] compiler_4.4.1          rlang_1.1.4             crayon_1.5.3           
[130] rstantools_2.4.0        ggsignif_0.6.4          labeling_0.4.3         
[133] rematch2_2.1.2          plyr_1.8.9              sjmisc_2.8.10          
[136] stringi_1.8.4           rstan_2.32.6            psych_2.4.6.26         
[139] viridisLite_0.4.2       QuickJSR_1.2.2          munsell_0.5.1          
[142] Brobdingnag_1.2-9       bayestestR_0.13.2.3     V8_4.4.2               
[145] fontquiver_0.2.1        Matrix_1.7-0            hms_1.1.3              
[148] patchwork_1.2.0         bit64_4.0.5             gfonts_0.2.0           
[151] devEMF_4.4-2            shiny_1.8.1.1           memoise_2.0.1          
[154] igraph_2.0.3            broom_1.0.6             RcppParallel_5.1.7     
[157] bslib_0.7.0             bit_4.0.5               officer_0.6.6